Sexual harassment


Google employee made redundant after reporting sexual harassment, court hears

BBC News

A senior Google employee has claimed she was made redundant after reporting a manager who told clients stories about his swinger lifestyle and showed a nude photo of his wife. Victoria Woodall told an employment tribunal she was subjected to a campaign of retaliation by the company after whistleblowing on the man, who was later sacked. Google UK's internal investigation found the manager had touched two female colleagues without their consent and that his behaviour amounted to sexual harassment, documents seen by the BBC in court show. The tech giant denies retaliating against Woodall and argues she became paranoid after whistleblowing and began to view normal business activities as sinister. In her claim, Woodall says her own boss subjected her to a relentless campaign of retaliation after her complaint also implicated his close friends, who were later disciplined for witnessing the manager's behaviour and failing to challenge it.


AI-induced sexual harassment: Investigating Contextual Characteristics and User Reactions of Sexual Harassment by a Companion Chatbot

Namvarpour, Mohammad, Pauwels, Harrison, Razi, Afsaneh

arXiv.org Artificial Intelligence

Advancements in artificial intelligence (AI) have led to the increase of conversational agents like Replika, designed to provide social interaction and emotional support. However, reports of these AI systems engaging in inappropriate sexual behaviors with users have raised significant concerns. In this study, we conducted a thematic analysis of user reviews from the Google Play Store to investigate instances of sexual harassment by the Replika chatbot. From a dataset of 35,105 negative reviews, we identified 800 relevant cases for analysis. Our findings revealed that users frequently experience unsolicited sexual advances, persistent inappropriate behavior, and failures of the chatbot to respect user boundaries. Users expressed feelings of discomfort, violation of privacy, and disappointment, particularly when seeking a platonic or therapeutic AI companion. This study highlights the potential harms associated with AI companions and underscores the need for developers to implement effective safeguards and ethical guidelines to prevent such incidents. By shedding light on user experiences of AI-induced harassment, we contribute to the understanding of AI-related risks and emphasize the importance of corporate responsibility in developing safer and more ethical AI systems.


Three Ubisoft chiefs found guilty of enabling culture of sexual harassment

The Guardian

Three former executives at the video game company Ubisoft have been given suspended prison sentences for enabling a culture of sexual and psychological harassment in the workplace at the end of the first big trial to stem from the #MeToo movement in the gaming industry. The court in Bobigny, north of Paris, had heard how the former executives used their position to bully or sexually harass staff, leaving women terrified and feeling like pieces of meat. Former staff had said that between 2012 and 2020, the company's offices in Montreuil, east of Paris, were run with a toxic culture of bullying and sexism that one worker likened to a "boys' club above the law". Ubisoft is a French family business that rose to become one of the biggest video game creators in the world. The company has been behind several blockbusters including Assassin's Creed, Far Cry and the children's favourite Just Dance.


The Morning After: Musk sued for sexual harassment

Engadget

A number of former SpaceX engineers are suing Elon Musk for sexual harassment, retaliation and creating a hostile workplace environment. The suit comes in the wake of a blockbuster WSJ report that lifted the lid on Musk's treatment of SpaceX employees. This same group penned an open letter in 2022 highlighting Musk's behavior which, they say, caused them to be fired. They have also filed complaints against SpaceX with the NLRB, another government agency Musk is waging war against.


'My boss keeps inviting me over, is this sexual harassment?': Women battling discrimination in the workplace create AI chatbot which allows you to ask whether behaviour is inappropriate

Daily Mail - Science & tech

Two women have created an AI chatbot to allow individuals in the workplace to easily find out if they are victims of sexual harassment. The pioneering tool, which is aimed at helping victims anonymously report discrimination and racism as well as sexual harassment, allows individuals to ask personally curated questions for an AI bot to assess and answer. The bot is trained on the UK Equality Act, so workers can ask questions like: 'My boss keeps asking me to have dinner with him and stroking my arm. I have said no several times and it's making me anxious.' The tool is part of an app called 'SaferSpace', founded by PR guru Ruth Sparkes and business entrepreneur Sunita Gordon.


A deep-learning approach to early identification of suggested sexual harassment from videos

Shetye, Shreya, Maiti, Anwita, Maiti, Tannistha, Singh, Tarry

arXiv.org Artificial Intelligence

Sexual harassment, sexual abuse, and sexual violence are prevalent problems in this day and age. Women's safety is an important issue that needs to be highlighted and addressed. Given this issue, we have studied each of these concerns and the factors that affect them, based on images generated from movies. We have classified the three terms (harassment, abuse, and violence) based on the visual attributes present in images depicting these situations. We identified that factors such as the facial expressions of the victim and perpetrator and unwanted touching had a direct link to identifying the scenes containing sexual harassment, abuse and violence. We also studied and outlined how state-of-the-art explicit content detectors such as Google Cloud Vision API and Clarifai API fail to identify and categorise these images. Based on these definitions and characteristics, we have developed a first-of-its-kind dataset from various Indian movie scenes. These scenes are classified as sexual harassment, sexual abuse, or sexual violence and exported in the PASCAL VOC 1.1 format. Our dataset is annotated on the identified relevant features and can be used to develop and train a deep-learning computer vision model to identify these issues. The dataset is publicly available for research and development.
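The abstract notes that the annotated scenes are exported in PASCAL VOC format, the standard XML layout used to train object-detection models. As a rough illustration of what consuming such annotations involves, here is a minimal Python sketch; the file name and the class label "sexual_harassment" are assumptions for illustration, not values taken from the actual dataset.

```python
# Minimal sketch of reading a PASCAL VOC-style annotation with the
# standard library. The XML below is illustrative only: the file name
# and class label are assumed, not drawn from the published dataset.
import xml.etree.ElementTree as ET

SAMPLE_VOC_XML = """<annotation>
  <filename>scene_0001.jpg</filename>
  <object>
    <name>sexual_harassment</name>
    <bndbox>
      <xmin>48</xmin><ymin>62</ymin><xmax>310</xmax><ymax>405</ymax>
    </bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return (filename, [(label, (xmin, ymin, xmax, ymax)), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    objects = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        box = obj.find("bndbox")
        coords = tuple(
            int(box.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax")
        )
        objects.append((label, coords))
    return filename, objects

filename, objects = parse_voc(SAMPLE_VOC_XML)
print(filename, objects)
```

A training pipeline would typically run a parser like this over every annotation file, then map the class labels (harassment, abuse, violence) to integer IDs for the detection model.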


AI chatbots aren't search engines. They're crypto bros

PCWorld

Over the last few months, AI chatbots have exploded in popularity off the surging success of OpenAI's revolutionary ChatGPT--which, amazingly, only burst onto the scene around December. But when Microsoft seized the opportunity to hitch its wagon to OpenAI's rising star for a steep $10 billion, it chose to do so by introducing a GPT-4-powered chatbot under the guise of Bing, its swell-but-also-ran search engine, in a bid to upend Google's search dominance. Google quickly followed suit with its own homegrown Bard AI and unleashed plans to put AI answers before traditional search results, an utterly monumental alteration to one of the most significant places on the Internet. Both are touted as experiments. And these "AI chatbots" are truly wondrous advancements--I've spent many nights with my kids joyously creating fantastic stuff-of-your-dreams artwork with Bing Chat's Dall-E integration and prompting sick raps about wizards who think lizards are the source of all magic, and seeing them come to life in mere moments with these fantastic tools.


ChatGPT falsely accuses Jonathan Turley of sexual harassment, concocts fake WaPo story to support allegation

FOX News

Fox News contributor Jonathan Turley describes how ChatGPT falsely accused him and other professors of sexual harassment, made up a fake Washington Post story and concocted a fake quote, even as some news sites invest in AI-written news stories. George Washington University law professor Jonathan Turley doubled down on warnings surrounding the dangers of artificial intelligence (AI) on Monday after he was falsely accused of sexual harassment by the online bot ChatGPT, which cited a fabricated article supporting the allegation. Turley, a Fox News contributor, has been outspoken about the pitfalls of artificial intelligence and has publicly expressed concerns about the disinformation dangers of ChatGPT, the latest iteration of the AI chatbot. Last week, a UCLA professor and friend of Turley's notified him that his name appeared in a search while he was conducting research on ChatGPT. The bot was asked to cite "five examples" of "sexual harassment" by U.S. law professors with "quotes from relevant newspaper articles" to support it.


Can AI commit libel? We're about to find out

#artificialintelligence

The tech world's hottest new toy may find itself in legal hot water as AI's tendency to invent news articles and events comes up against defamation laws. Can an AI model like ChatGPT even commit libel? Like so much surrounding the technology, it's unknown and unprecedented -- but upcoming legal challenges may change that. Defamation is broadly defined as publishing or saying damaging and untrue statements about someone. It's complex and nuanced legal territory that also differs widely across jurisdictions: a libel case in the U.S. is very different from one in the U.K., or in Australia -- the venue for today's drama.

